Beersheba
Israeli researchers bypass facial recognition using AI-generated makeup patterns
Israeli researchers have found an apparently straightforward way to fool facial recognition software: applying conventional makeup to specific areas of the face according to a pattern determined by an artificial intelligence program. The study, conducted at Beersheba's Ben-Gurion University, found that applying the computer-generated makeup pattern to test subjects let them bypass the systems at a near-100% rate. Twenty volunteers (10 men and 10 women) were tested under three conditions: makeup applied to the most identifiable areas of the face according to a heatmap generated by the software, randomly applied makeup, and no makeup at all. The subjects then walked through a real-world test environment, a hallway equipped with two cameras and a variety of lighting conditions. With the software-designed makeup pattern, subjects were correctly identified just 1.22% of the time, compared with 33.73% with random makeup and 47.57% without any makeup applied.
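How such a heatmap might be produced is worth sketching. The study's exact pipeline isn't detailed here, but a common approach is gradient-based saliency: backpropagate the recognizer's similarity score to the input pixels and treat the highest-gradient regions as the most identity-relevant, and therefore the best targets for makeup. The minimal PyTorch sketch below assumes a stand-in embedding network (the TinyFaceEncoder is illustrative, not the ArcFace-style model the researchers targeted).

```python
# A minimal sketch (not the researchers' actual pipeline) of deriving an
# "identifiability" heatmap: backpropagate the cosine similarity between a
# probe image and an enrolled reference to the probe's pixels, and read the
# gradient magnitude as a map of identity-relevant regions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFaceEncoder(nn.Module):
    """Placeholder face-embedding network (an assumption, not ArcFace)."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def identifiability_heatmap(encoder, probe, reference):
    """Gradient of the probe/reference similarity w.r.t. the probe pixels."""
    probe = probe.clone().requires_grad_(True)
    sim = F.cosine_similarity(encoder(probe), encoder(reference)).sum()
    sim.backward()
    # Aggregate gradient magnitude over colour channels -> H x W heatmap.
    return probe.grad.abs().sum(dim=1).squeeze(0)

encoder = TinyFaceEncoder().eval()
probe = torch.rand(1, 3, 112, 112)      # face image of the subject
reference = torch.rand(1, 3, 112, 112)  # enrolled gallery image
heat = identifiability_heatmap(encoder, probe, reference)
print(heat.shape)  # torch.Size([112, 112]); peaks mark regions to make up
```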
Scientists create AI that can suggest where to apply makeup to fool facial recognition
You don't have to wear a Halloween mask to avoid being detected by facial recognition software: a dab of makeup will do the trick, according to a new study. Researchers at Ben-Gurion University in Beersheba, Israel, developed an artificial intelligence that shows users where to apply some foundation or rouge to fool face-recognition algorithms into thinking they're looking at a different person. The researchers tested their scheme against ArcFace, a machine-learning model that takes two facial images and determines the likelihood they're the same person. The team tested 20 volunteers (10 men and 10 women) in a real-world environment with two cameras and a variety of lighting conditions and shooting angles. Participants wearing 'adversarial makeup,' as recommended by the program, tricked the system 98.8 percent of the time.
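For context, verification models of this kind typically reduce each face to a normalized embedding and compare the two embeddings with cosine similarity against a tuned threshold. The sketch below is illustrative only: the stand-in encoder and the 0.35 threshold are assumptions, not details from the study.

```python
# A hedged sketch of the verification step an ArcFace-style model performs:
# embed both images, compare with cosine similarity, and accept the pair as
# the same person only above a threshold.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in encoder: any network mapping a face crop to an embedding fits here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))

def verify(img_a, img_b, threshold=0.35):
    """Return (same_person?, score) for two 1x3x112x112 face crops."""
    with torch.no_grad():
        emb_a = F.normalize(encoder(img_a), dim=1)
        emb_b = F.normalize(encoder(img_b), dim=1)
    score = F.cosine_similarity(emb_a, emb_b).item()
    return score >= threshold, score

gallery = torch.rand(1, 3, 112, 112)  # enrolled reference image
frame = torch.rand(1, 3, 112, 112)    # live camera frame
print(verify(gallery, frame))
```

Under this scheme, the adversarial makeup succeeds when a live camera frame of the subject scores below the threshold against their enrolled image, so the system reports a different person.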
Understanding the Relationship Between AI and Cybersecurity
The first thing many of us think about when it comes to the future relationship between artificial intelligence (AI) and cybersecurity is Skynet -- the fictional neural net-based group mind from the "Terminator" movie franchise. But at least one security professional (with a somewhat rosier view) suggests that AI must be understood across a broader landscape: how it will influence cybersecurity, and how IT can use it to plan future security technology purchases. Earlier this year, Dudu Mimran, chief technology officer (CTO) at Telekom Innovation Laboratories in Israel, discussed the relationship between AI and cybersecurity in a speech and a subsequent blog post for the Organisation for Economic Co-operation and Development (OECD) Forum 2018. I caught up with Mimran at his office in Beersheba, Israel, for an interview, which we continued later over email. "While the threat of cyberattacks powered by AI is increasingly likely, I am less concerned in the short- and midterm about machines making up their minds and being able to harm people," Mimran said.
Malware Lets a Drone Steal Data by Watching a Computer's Blinking LED
A few hours after dark one evening earlier this month, a small quadcopter drone lifted off from the parking lot of Ben-Gurion University in Beersheba, Israel. It soon trained its built-in camera on its target: a desktop computer's tiny blinking light inside a third-floor office nearby. The pinpoint flickers, emitted by the LED hard-drive indicator that lights up intermittently on practically every modern Windows machine, would hardly arouse the suspicions of anyone working in the office after hours. But in fact, that LED was silently winking out an optical stream of the computer's secrets to the camera floating outside. That data-stealing drone, shown in the video below, works as a Mr. Robot-style demonstration of a very real espionage technique.
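The transmit side of that channel can be sketched in a few lines. The published technique modulates the hard-drive LED by issuing controlled disk operations; the hedged Python sketch below uses simple on-off keying, with writes plus fsync as one way to force real disk activity. The BIT_SLOT timing and the scratch.bin path are illustrative assumptions, not the researchers' parameters.

```python
# A minimal sketch of the covert channel's transmit side: one bit per time
# slot, where disk activity lights the HDD LED (bit 1) and idling leaves it
# dark (bit 0). All constants here are assumptions for illustration.
import os
import time

BIT_SLOT = 0.05          # seconds per bit; achievable rates depend on the camera
SCRATCH = "scratch.bin"  # hypothetical scratch file used to force disk activity

def transmit_bit(bit: int) -> None:
    """Hold the HDD LED lit (bit 1) or dark (bit 0) for one time slot."""
    end = time.time() + BIT_SLOT
    if bit:
        with open(SCRATCH, "wb") as f:
            while time.time() < end:
                f.write(b"\0" * 4096)
                f.flush()
                os.fsync(f.fileno())  # push the write to disk so the LED blinks
    else:
        time.sleep(BIT_SLOT)          # no I/O, so the LED stays dark

def transmit(data: bytes) -> None:
    """Send each byte most-significant bit first as on-off keyed LED pulses."""
    for byte in data:
        for i in range(7, -1, -1):
            transmit_bit((byte >> i) & 1)

transmit(b"secret")  # the camera outside reads the flicker pattern back
```

A receiver, such as the drone's camera, then samples the LED's brightness once per slot and reconstructs the bit stream.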